Markov Matrices

Example: Consider the Markov matrix \begin{align*} A=\left[\begin{array}{cc} 0.6 & 0.3\\ 0.4 & 0.7 \end{array}\right]. \end{align*}

The sum of the first column: $0.6+0.4=1$.

The sum of the second column: $0.3+0.7=1$.

Example: A man always eats lunch at one of two restaurants, A and B. He never eats at A twice in a row. However, if he eats at B, he is three times as likely to eat at B next time as at A. Initially, he is equally likely to eat at either restaurant.

We have $$ X_{n+1} = PX_n,\quad n=0,1,2,\ldots. $$

We can now successively compute $X_1$, $X_2$, $\ldots$

For large $n$, the state vectors $X_n$ approach a steady-state vector $X_s$ satisfying $X_s = PX_s$.
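The long-run behaviour can be checked numerically. Below is a minimal sketch, with the transition matrix $P$ read off from the problem statement (states ordered A, B, so each column gives the probabilities for the next restaurant); the helper name `mat_vec` is illustrative, not from the text.

```python
def mat_vec(M, v):
    """Multiply a matrix (given as a list of rows) by a column vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

P = [[0.0, 0.25],   # P(next = A): never A right after A; 1/4 after B
     [1.0, 0.75]]   # P(next = B): always B right after A; 3/4 after B

X = [0.5, 0.5]      # initially equally likely to eat at either restaurant
for _ in range(50): # X_{n+1} = P X_n; 50 steps is ample for convergence here
    X = mat_vec(P, X)

print([round(x, 4) for x in X])  # [0.2, 0.8]
```

So in the long run he eats at A about 20\% of the time and at B about 80\% of the time.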

Example: Consider the migration matrix $$A=\left[\begin{array}{ccc} 0.5 & 0.2 & 0.4\\ 0.3 & 0.4 & 0\\ 0.2 & 0.4 & 0.6 \end{array}\right]$$ for locations 1, 2, and 3. Suppose initially there are 150 residents in location 1, 150 in location 2, and 200 in location 3. Find the population in the three locations after 1, 2, 3, 4, and 5 units of time.
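The five populations can be computed by applying $A$ repeatedly to $X_0$. A minimal sketch (the helper name `mat_vec` is illustrative, not from the text):

```python
def mat_vec(M, v):
    """Multiply a matrix (given as a list of rows) by a column vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[0.5, 0.2, 0.4],
     [0.3, 0.4, 0.0],
     [0.2, 0.4, 0.6]]

X = [150.0, 150.0, 200.0]   # initial populations in locations 1, 2, 3
for n in range(1, 6):       # X_n = A X_{n-1}
    X = mat_vec(A, X)
    print(f"after {n} units:", [round(x, 2) for x in X])
# after 1 unit: [185.0, 105.0, 210.0]; the total stays at 500 throughout
```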

For large $n$, the population vector $X_n$ approaches a steady-state vector.

Example:

The values 40\%, 40\%, and 20\% are transition probabilities, and are assumed to be known.

Suppose that it is rainy on Thursday. What is the probability that it will be sunny on Sunday?

We put the transition probabilities into a transition matrix,

The initial state vector, $X_0$, corresponds to the state of the weather on Thursday (it is rainy); therefore, \begin{align*} X_0 = \left[\begin{array}{l} 0 \\ 0 \\ 1 \end{array}\right] \begin{array}{c} \mbox{S} \\ \mbox{C} \\ \mbox{R} \end{array} \end{align*}

The state vector for Friday:

The state vector for Saturday:

The state vector for Sunday:

The state vector for Monday:

Example:

In the previous example, suppose that it is cloudy on Wednesday. What is the probability that it will be rainy on Friday?

The initial state vector, $X_0$, corresponds to the state of the weather on Wednesday (it is cloudy); therefore, \begin{align*}X_0 =\left[\begin{array}{l}0 \\ 1 \\ 0\end{array}\right] \begin{array}{c}S \\ C \\ R \end{array} \end{align*}

The state vector for Thursday:

The state vector for Friday:

Example: A person sets off on a random walk with three possible locations. The Markov matrix of probabilities $A = [a_{i,j}]$ is given by \begin{align*} A=\left[\begin{array}{ccc} 0.1 & 0.3 & 0.5\\ 0.1 & 0.3 & 0.2\\ 0.8 & 0.4 & 0.3 \end{array}\right]. \end{align*} If the walker starts in location 1, what is the probability of being in location 3 at time $n = 3$?

We need to find $x_{{\color{blue}m}{\color{red}n}}$ with ${\color{blue}m=3}$ and ${\color{red}n=3}$. Since the walker begins in location 1, we have \begin{align*} X_{{\color{red}0}}=\begin{bmatrix}x_{{\color{blue}1}{\color{red}0}} \\x_{{\color{blue}2}{\color{red}0}} \\x_{{\color{blue}3}{\color{red}0}}\end{bmatrix} =\left[\begin{array}{c} 1\\ 0\\ 0 \end{array}\right]\begin{array}{c} \leftarrow~\text{location 1}\\ \leftarrow~\text{location 2}\\ \leftarrow~\text{location 3} \end{array}. \end{align*} The goal is to calculate $x_{{\color{blue}3}{\color{red}3}}$. To do this we calculate $X_3$, using $X_{n+1} = AX_n$.
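The computation can be sketched in code: applying $A$ three times to $X_0$ gives $X_3$, whose third entry is the required probability (the helper name `mat_vec` is illustrative, not from the text).

```python
def mat_vec(M, v):
    """Multiply a matrix (given as a list of rows) by a column vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[0.1, 0.3, 0.5],
     [0.1, 0.3, 0.2],
     [0.8, 0.4, 0.3]]

X = [1.0, 0.0, 0.0]    # walker starts in location 1
for _ in range(3):     # three applications of X_{n+1} = A X_n give X_3
    X = mat_vec(A, X)

print([round(x, 3) for x in X])  # [0.284, 0.176, 0.54]
```

so the walker is in location 3 at time $n=3$ with probability $0.54$.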

Example: In the previous example, in case that the walker starts in location 3, what is the probability of ending back in location 3 at time $n = 2$?

We need to find $x_{{\color{blue}m}{\color{red}n}}$ with ${\color{blue}m=3}$ and ${\color{red}n=2}$. Since the walker begins in location 3, we have \begin{align*} X_{{\color{red}0}}=\begin{bmatrix}x_{{\color{blue}1}{\color{red}0}} \\x_{{\color{blue}2}{\color{red}0}} \\x_{{\color{blue}3}{\color{red}0}}\end{bmatrix} =\left[\begin{array}{c} 0\\ 0\\ 1 \end{array}\right] \begin{array}{c} \leftarrow~\text{location 1}\\ \leftarrow~\text{location 2}\\ \leftarrow~\text{location 3} \end{array}. \end{align*} The goal is to calculate $x_{{\color{blue}3}{\color{red}2}}$. To do this we calculate $X_2$, using $X_{n+1} = AX_n$.
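The same numerical sketch, with the starting vector changed to location 3 and only two steps (the helper name `mat_vec` is again illustrative):

```python
def mat_vec(M, v):
    """Multiply a matrix (given as a list of rows) by a column vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[0.1, 0.3, 0.5],
     [0.1, 0.3, 0.2],
     [0.8, 0.4, 0.3]]

X = [0.0, 0.0, 1.0]    # walker starts in location 3
for _ in range(2):     # two applications of X_{n+1} = A X_n give X_2
    X = mat_vec(A, X)

print([round(x, 2) for x in X])  # [0.26, 0.17, 0.57]
```

so the walker is back in location 3 at time $n=2$ with probability $0.57$.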

Example: Consider the migration matrix \begin{align*} A =\left[\begin{array}{ccc} 0.6 & 0 & 0.1\\ 0.2 & 0.8 & 0\\ 0.2 & 0.2 & 0.9 \end{array}\right] \end{align*} for locations $1$,$2,$ and $3$. Suppose initially there are $100$ residents in location $1$, $200$ in location $2$ and $400$ in location $3$. Find the population in the three locations after a long time.

To find this steady state vector, we have \begin{align*} X_s = AX_s \Rightarrow (I-A)X_s=0 \quad \Rightarrow \quad \left(\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix} - \begin{bmatrix}0.6 & 0 & 0.1\\0.2 & 0.8 & 0\\0.2 & 0.2 & 0.9\end{bmatrix} \right)X_s= \begin{bmatrix}0\\0\\0\end{bmatrix}, \quad \Rightarrow \quad \left[\begin{array}{rrr} 0.4 & 0 & -0.1\\ -0.2 & 0.2 & 0\\ -0.2 & -0.2 & 0.1 \end{array}\right]X_s= \begin{bmatrix}0\\0\\0\end{bmatrix}. \end{align*} The augmented form of this linear system: $$\left[\begin{array}{rrr|c} 0.4 & 0 & -0.1 & 0\\ -0.2 & 0.2 & 0 & 0\\ -0.2 & -0.2 & 0.1 & 0 \end{array}\right],$$ and in RREF $$\left[\begin{array}{ccc|c} {1.0} & 0 & -0.25 & 0\\ 0 & {1.0} & -0.25 & 0\\ 0 & 0 & 0 & 0 \end{array}\right].$$ Therefore, the general solution is \begin{align*} X_s= t\begin{bmatrix}0.25 \\ 0.25 \\ 1\end{bmatrix}. \end{align*} The initial vector $X_0$ is given by $\begin{bmatrix}100 \\ 200 \\ 400\end{bmatrix}.$

We know that $X_s$ has positive entries which have the same sum as the entries of $X_0$. Therefore, \begin{align*} \text{Sum of entries of }X_s&=\text{Sum of entries of }X_0\\ 0.25t+0.25t+ t&=100+200+400 \end{align*} Solving for $t$, we get $t=\dfrac{1400}{3}$. Therefore the population in the long run is given by $$\frac{1400}{3}\begin{bmatrix}0.25 \\ 0.25 \\ 1\end{bmatrix}= \begin{bmatrix}\frac{350}{3}\\ \frac{350}{3}\\ \frac{1400}{3}\end{bmatrix}.$$ Since we are speaking about populations, we would need to round these numbers to provide a logical answer. The steady state vector $X_s$ is given by $$X_s= \begin{bmatrix}117\\ 117\\ 467\end{bmatrix}.$$
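The long-run answer can be confirmed by simply iterating $X_{n+1}=AX_n$ until the vector stops changing (the helper name `mat_vec` is illustrative, not from the text):

```python
def mat_vec(M, v):
    """Multiply a matrix (given as a list of rows) by a column vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[0.6, 0.0, 0.1],
     [0.2, 0.8, 0.0],
     [0.2, 0.2, 0.9]]

X = [100.0, 200.0, 400.0]   # initial populations in locations 1, 2, 3
for _ in range(200):        # iterate well past convergence
    X = mat_vec(A, X)

print([round(x, 2) for x in X])  # [116.67, 116.67, 466.67]
```

which matches $\frac{1400}{3}\begin{bmatrix}0.25 & 0.25 & 1\end{bmatrix}^T$ before rounding.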

Example: A man eats one of three soups—beef, chicken, and vegetable—each day. He never eats the same soup two days in a row. If he eats beef soup on a certain day, he is equally likely to eat each of the others the next day; if he does not eat beef soup, he is twice as likely to eat it the next day as the alternative.

The states here are $B$, $C$, and $V$, the three soups. The transition matrix $A$ is given in the table. (Recall that, for each state, the corresponding column lists the probabilities for the next state.)

\begin{center}
\begin{tabular}{c|ccc}
 & B & C & V\\ \hline
B & $0$ & $\frac{2}{3}$ & $\frac{2}{3}$\\
C & $\frac{1}{2}$ & $0$ & $\frac{1}{3}$\\
V & $\frac{1}{2}$ & $\frac{1}{3}$ & $0$
\end{tabular}
\end{center}
If he has beef soup initially, then the initial state vector is \begin{align*} X_0 = \left[\begin{array}{l} 1 \\ 0 \\ 0 \end{array}\right] \begin{array}{c}B \\ C \\ V \end{array} \end{align*} Then two days later the state vector is $X_2$ . If $A$ is the transition matrix, then \begin{align*} X_1=AX_0= \left[\begin{array}{ccc} 0 & \frac{2}{3} & \frac{2}{3}\\ \frac{1}{2} & 0 & \frac{1}{3}\\ \frac{1}{2} & \frac{1}{3} & 0 \end{array}\right] \left[\begin{array}{c} 1\\ 0\\ 0 \end{array}\right]=\left[\begin{array}{c} 0\\ \frac{1}{2}\\ \frac{1}{2} \end{array}\right],\\ X_2=AX_1=AAX_0=A^2X_0= \left[\begin{array}{ccc} 0 & \frac{2}{3} & \frac{2}{3}\\ \frac{1}{2} & 0 & \frac{1}{3}\\ \frac{1}{2} & \frac{1}{3} & 0 \end{array}\right] \left[\begin{array}{c} 0\\ \frac{1}{2}\\ \frac{1}{2} \end{array}\right]=\left[\begin{array}{c} \frac{2}{3}\\ \frac{1}{6}\\ \frac{1}{6} \end{array}\right] \begin{array}{c}B \\ C \\ V \end{array} \end{align*} so he eats beef soup two days later with probability $\dfrac{2}{3}$.

Moreover, this also shows that he eats chicken and vegetable soup each with probability $\dfrac{1}{6}$.

To find the long-run probabilities, we must find the steady-state vector $X_s$.

\begin{align*} X_s = AX_s \Rightarrow (I-A)X_s=0 \quad \Rightarrow \quad & \left(\begin{bmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{bmatrix} - \left[\begin{array}{ccc} 0 & \frac{2}{3} & \frac{2}{3}\\ \frac{1}{2} & 0 & \frac{1}{3}\\ \frac{1}{2} & \frac{1}{3} & 0 \end{array}\right] \right)X_s= \begin{bmatrix}0\\0\\0\end{bmatrix}, \\ \quad \Rightarrow \quad & \left[\begin{array}{ccc} 1 & -\frac{2}{3} & -\frac{2}{3}\\ -\frac{1}{2} & 1 & -\frac{1}{3}\\ -\frac{1}{2} & -\frac{1}{3} & 1 \end{array}\right]X_s= \begin{bmatrix}0\\0\\0\end{bmatrix}. \end{align*}

The general solution: \begin{align*} X_s=t\left[\begin{array}{c} \frac{4}{3}\\ 1\\ 1 \end{array}\right]. \end{align*} We know from theorem 7.12 that $X_s$ has positive entries which have the same sum as the entries of $X_0$. Therefore, \begin{align*} \frac{4\,t}{3}+ t+ t=1+0+0. \quad \Rightarrow \quad \frac{10\,t}{3}=1 \quad \Rightarrow \quad t=\frac{3}{10}=0.3 \end{align*} Therefore, the steady state vector $X_s$ is given by \begin{align*} X_s= (0.3)\left[\begin{array}{c} \frac{4}{3}\\ 1\\ 1 \end{array}\right]= \left[\begin{array}{c} 0.4\\ 0.3\\ 0.3 \end{array}\right] \begin{array}{c} B \\ C \\ V \end{array}. \end{align*} Hence, in the long run, he eats beef soup 40\% of the time and eats chicken soup and vegetable soup each 30\% of the time.
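Both the two-day probabilities and the steady state can be verified with exact arithmetic; a sketch using Python's `fractions` module (the helper name `mat_vec` is illustrative, not from the text):

```python
from fractions import Fraction as F

def mat_vec(M, v):
    """Multiply a matrix (given as a list of rows) by a column vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# transition matrix for the soups, states ordered B, C, V
A = [[F(0), F(2, 3), F(2, 3)],
     [F(1, 2), F(0), F(1, 3)],
     [F(1, 2), F(1, 3), F(0)]]

X2 = mat_vec(A, mat_vec(A, [F(1), F(0), F(0)]))  # two days after beef soup
print(X2)                     # entries 2/3, 1/6, 1/6, as computed above

Xs = [F(2, 5), F(3, 10), F(3, 10)]  # the claimed steady state (0.4, 0.3, 0.3)
print(mat_vec(A, Xs) == Xs)   # True: A X_s = X_s
```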
